Evaluation of gradient descent learning algorithms with adaptive and local learning rate for recognising hand-written numerals
Authors
Abstract
Gradient descent learning algorithms, notably Back Propagation (BP), can significantly increase the classification performance of Multi-Layer Perceptrons when a local and adaptive learning-rate management approach is adopted. In this paper, we compare the performance of two BP algorithms, one with a fixed and one with an adaptive learning rate, on hand-written character classification. The results show that both the validation error and the average number of learning iterations are lower for the adaptive learning-rate BP algorithm.
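The abstract contrasts a fixed learning rate with a local, adaptive one. As an illustration only (the paper's exact adaptation rule is not given here), the sketch below trains a toy logistic classifier with one learning rate per weight, adapted in a sign-based style: the rate grows while the gradient component keeps its sign and shrinks when the sign flips. All names, data, and factors (`up`, `down`, the clipping bounds) are assumptions for the demo, not the authors' method.

```python
import numpy as np

# Toy, linearly separable binary problem (assumed data, for illustration).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))
w_true = np.array([1.5, -2.0, 0.5, 1.0])
y = (X @ w_true > 0).astype(float)

w = np.zeros(4)
eta = np.full(4, 0.05)        # local: one learning rate per weight
prev_grad = np.zeros(4)
up, down = 1.1, 0.5           # assumed adaptation factors

for epoch in range(200):
    p = 1.0 / (1.0 + np.exp(-(X @ w)))   # sigmoid output
    grad = X.T @ (p - y) / len(y)        # gradient of the logistic loss
    # Adaptive step: persistent sign -> grow rate, sign flip -> shrink rate.
    eta = np.where(grad * prev_grad > 0, eta * up, eta)
    eta = np.where(grad * prev_grad < 0, eta * down, eta)
    eta = np.clip(eta, 1e-4, 1.0)        # keep step sizes bounded
    w -= eta * grad                      # per-weight gradient descent update
    prev_grad = grad

accuracy = np.mean((p > 0.5) == (y > 0.5))
print(accuracy)
```

A fixed-rate baseline is recovered by deleting the two `np.where` lines, which is essentially the comparison the paper reports at the scale of an MLP on hand-written numerals.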
Similar papers
Designing stable neural identifier based on Lyapunov method
The stability of the learning rate in neural network identifiers and controllers is a challenging issue that attracts great interest from neural network researchers. This paper suggests an adaptive gradient descent algorithm with stable learning laws for a modified dynamic neural network (MDNN) and studies the stability of this algorithm. Also, a stable learning algorithm for the parameters of ...
Improving the Convergence of the Backpropagation Algorithm Using Learning Rate Adaptation Methods
This article focuses on gradient-based backpropagation algorithms that use either a common adaptive learning rate for all weights or an individual adaptive learning rate for each weight and apply the Goldstein/Armijo line search. The learning-rate adaptation is based on descent techniques and estimates of the local Lipschitz constant that are obtained without additional error function and gradi...
Position Control of a Pulse Width Modulated Pneumatic Systems: an Experimental Comparison
In this study, a new adaptive controller is proposed for position control of pneumatic systems. Difficulties associated with the mathematical model of the system, together with the instability caused by Pulse Width Modulation (PWM) in learning-based controllers using gradient descent, motivate the development of a new approach for PWM pneumatics. In this study, two modified Feedback Error L...
A Framework for the Development of Globally Convergent Adaptive Learning Rate Algorithms
In this paper we propose a framework for developing globally convergent batch training algorithms with adaptive learning rate. The proposed framework provides conditions under which global convergence is guaranteed for adaptive learning rate training algorithms. To this end, the learning rate is appropriately tuned along the given descent direction. Providing conditions regarding the search dir...
A Differential Evolution and Spatial Distribution based Local Search for Training Fuzzy Wavelet Neural Network
Many parameter-tuning algorithms have been proposed for training Fuzzy Wavelet Neural Networks (FWNNs). Absence of an appropriate structure, convergence to local optima, and low learning speed are deficiencies of FWNNs identified in previous studies. In this paper, a Memetic Algorithm (MA) is introduced to train FWNNs and address these learning deficiencies. Differential Evolution...